Snake: a Stochastic Proximal Gradient Algorithm for Regularized Problems over Large Graphs

Authors

  • Adil Salim
  • Pascal Bianchi
  • Walid Hachem
Abstract

A regularized optimization problem over a large unstructured graph is studied, where the regularization term is tied to the graph geometry. Typical examples include the total variation and the Laplacian regularizations over the graph. When the proximal gradient algorithm is applied to this problem, efficient methods exist for implementing the proximity operator (backward step) in the special case where the graph is a simple path without loops. In this paper, an algorithm referred to as "Snake" is proposed to solve such regularized problems over general graphs by taking advantage of these fast methods. The algorithm consists in properly selecting random simple paths in the graph and performing the proximal gradient algorithm over these simple paths. This algorithm is an instance of a new general stochastic proximal gradient algorithm whose convergence is proven. Applications to trend filtering and graph inpainting, among others, are provided. Numerical experiments are conducted over large graphs.
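To make the iteration concrete, the following is a minimal Python sketch of one Snake-style step. It is not the authors' implementation: for illustration it assumes the regularizer restricted to the sampled path is the quadratic Laplacian penalty, whose proximity operator reduces to a small tridiagonal linear solve, and the helpers grad_f and sample_simple_path are hypothetical placeholders.

import numpy as np

def prox_path_laplacian(v, gamma):
    # Prox of x -> (gamma / 2) * sum_i (x[i+1] - x[i])^2 at the point v:
    # solve (I + gamma * L) x = v, where L is the path-graph Laplacian.
    n = len(v)
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += 1.0
        L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0
        L[i + 1, i] -= 1.0
    return np.linalg.solve(np.eye(n) + gamma * L, v)

def snake_like_step(x, grad_f, sample_simple_path, gamma):
    # One stochastic proximal gradient iteration: forward (gradient) step
    # on the smooth term, backward (prox) step on a random simple path.
    path = sample_simple_path()            # list of node indices, no repeats
    y = x - gamma * grad_f(x)              # forward step
    y[path] = prox_path_laplacian(y[path], gamma)  # backward step on the path
    return y

In the actual algorithm the paths are drawn so that, on average, the per-path regularizers reconstruct the full graph regularizer, and a total-variation regularizer would typically use a fast 1D prox (such as a taut-string solver) in place of the linear solve; that bookkeeping is omitted here.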


Similar resources

A Scalable Computation of Regularized Precision Matrices via Stochastic Optimization

We consider the problem of computing a positive definite p × p inverse covariance matrix, aka precision matrix, θ = (θ_ij), which optimizes a regularized Gaussian maximum likelihood problem with the elastic-net regularizer $\sum_{i,j=1}^{p} \lambda\left(\alpha|\theta_{ij}| + \tfrac{1}{2}(1-\alpha)\theta_{ij}^{2}\right)$, with regularization parameters α ∈ [0, 1] and λ > 0. The associated convex semidefinite optimization problem is notoriously difficult to s...
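Purely as a reference for the penalty just described, a direct evaluation of the elastic-net regularizer is a one-liner; the function name below is an illustrative assumption, not code from the cited paper.

import numpy as np

def elastic_net_penalty(theta, lam, alpha):
    # sum_{i,j} lam * (alpha * |theta_ij| + 0.5 * (1 - alpha) * theta_ij^2)
    return lam * (alpha * np.abs(theta).sum()
                  + 0.5 * (1.0 - alpha) * np.square(theta).sum())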


Stochastic Coordinate Descent Methods for Regularized Smooth and Nonsmooth Losses

Stochastic Coordinate Descent (SCD) methods are among the first optimization schemes suggested for efficiently solving large scale problems. However, until now, there exists a gap between the convergence rate analysis and practical SCD algorithms for general smooth losses and there is no primal SCD algorithm for nonsmooth losses. In this paper, we discuss these issues using the recently develop...


A Proximal Stochastic Gradient Method with Progressive Variance Reduction

We consider the problem of minimizing the sum of two convex functions: one is the average of a large number of smooth component functions, and the other is a general convex function that admits a simple proximal mapping. We assume the whole objective function is strongly convex. Such problems often arise in machine learning, known as regularized empirical risk minimization. We propose and analy...
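A schematic Python sketch of one such proximal stochastic gradient step with variance reduction, for min_x (1/n) Σ_i f_i(x) + g(x), is shown below. It is not the cited paper's exact method; prox_g, the component gradients grad_f_i, the snapshot point x_tilde and its full gradient are assumed inputs.

import numpy as np

def prox_svrg_step(x, x_tilde, full_grad_tilde, grad_f_i, prox_g, i, eta):
    # Variance-reduced stochastic gradient at x, built from component i
    # and the full gradient stored at the snapshot point x_tilde.
    v = grad_f_i(i, x) - grad_f_i(i, x_tilde) + full_grad_tilde
    # Proximal (backward) step on the nonsmooth regularizer g.
    return prox_g(x - eta * v, eta)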


Asynchronous Stochastic Proximal Optimization Algorithms with Variance Reduction

Regularized empirical risk minimization (R-ERM) is an important branch of machine learning, since it constrains the capacity of the hypothesis space and guarantees the generalization ability of the learning algorithm. Two classic proximal optimization algorithms, i.e., proximal stochastic gradient descent (ProxSGD) and proximal stochastic coordinate descent (ProxSCD) have been widely used to so...


On Stochastic Primal-Dual Hybrid Gradient Approach for Compositely Regularized Minimization

We consider a wide spectrum of regularized stochastic minimization problems, where the regularization term is composite with a linear function. Examples of this formulation include graph-guided regularized minimization, generalized Lasso and a class of ℓ1-regularized problems. The computational challenge is that the closed-form solution of the proximal mapping associated with the regularization ...
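To illustrate why the linear composition is the difficulty, here is a minimal deterministic primal-dual hybrid gradient (Chambolle-Pock-style) sketch for the generalized Lasso min_x (1/2)‖x − b‖² + λ‖Dx‖₁. The cited paper studies a stochastic variant, and the function below is only an illustrative assumption: it never needs the prox of ‖Dx‖₁, only a clipping, which is the prox of the conjugate of λ‖·‖₁.

import numpy as np

def pdhg_generalized_lasso(b, D, lam, tau, sigma, n_iters=200):
    # PDHG iterations for (1/2)||x - b||^2 + lam * ||D x||_1.
    # Step sizes should satisfy tau * sigma * ||D||_2^2 <= 1.
    x = np.zeros_like(b, dtype=float)
    x_bar = x.copy()
    y = np.zeros(D.shape[0])
    for _ in range(n_iters):
        # Dual ascent step + projection onto [-lam, lam]
        # (prox of the conjugate of lam * ||.||_1).
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
        # Primal descent step + prox of tau * (1/2)||x - b||^2.
        x_new = (x - tau * (D.T @ y) + tau * b) / (1.0 + tau)
        x_bar = 2.0 * x_new - x
        x = x_new
    return x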



Journal:
  • CoRR

Volume: abs/1712.07027

Pages: -

Publication year: 2017